
    Zernike velocity moments for sequence-based description of moving features

    The increasing interest in processing sequences of images motivates the development of techniques for sequence-based object analysis and description. Accordingly, new velocity moments have been developed to allow a statistical description of both shape and associated motion through an image sequence. Through a generic framework, motion information is determined using the established centralised moments, enabling statistical moments to be applied to motion-based time-series analysis. The translation-invariant Cartesian velocity moments suffer from highly correlated descriptions due to their non-orthogonality. The new Zernike velocity moments overcome this by using orthogonal spatial descriptions based on the proven orthogonal Zernike basis; further, they are translation and scale invariant. To illustrate their benefits and application, the Zernike velocity moments have been applied to gait recognition, an emergent biometric. Good recognition results have been achieved on multiple datasets using relatively few spatial and/or motion features and basic feature-selection and classification techniques. The prime aim of this new technique is to allow the generation of statistical features which encode shape and motion information, with generic application capability. Applied performance analyses illustrate how the Zernike velocity moments exploit temporal correlation to improve a shape's description, and demonstrate how this temporal correlation improves the performance of the descriptor under more generalised application scenarios, including reduced-resolution imagery and occlusion.
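    As an illustrative sketch only (not the authors' implementation), the Cartesian velocity moments that the Zernike variant builds on can be formed by weighting spatial central moments with inter-frame centroid displacements; the function name and the omitted normalisation here are assumptions:

```python
import numpy as np

def cartesian_velocity_moment(frames, m, n, mu, nu):
    """Sketch of a Cartesian velocity moment: centralised spatial moments
    weighted by inter-frame centroid displacement, summed over the
    sequence. Normalisation is omitted for clarity."""
    total = 0.0
    prev_c = None
    for img in frames:
        mass = img.sum()
        ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
        cx = (xs * img).sum() / mass
        cy = (ys * img).sum() / mass
        if prev_c is not None:
            # velocity term: (xbar_i - xbar_{i-1})^mu * (ybar_i - ybar_{i-1})^nu
            u = (cx - prev_c[0]) ** mu * (cy - prev_c[1]) ** nu
            # centralised spatial moment: sum (x - xbar)^m (y - ybar)^n I(x, y)
            s = ((xs - cx) ** m * (ys - cy) ** n * img).sum()
            total += u * s
        prev_c = (cx, cy)
    return total
```

    With m = n = 0 and mu = 1, nu = 0, the result reduces to the blob's mass times its x-displacement per frame, which is one way to see how motion enters the description.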

    Automated Markerless Extraction of Walking People Using Deformable Contour Models

    We develop a new automated markerless motion-capture system for the analysis of walking people. We employ global evidence-gathering techniques guided by biomechanical analysis to robustly extract articulated motion. This forms a basis for new deformable contour models, using local image cues to capture shape and motion at a more detailed level. We extend the greedy snake formulation to include temporal constraints and occlusion modelling, increasing the capability of this technique when dealing with cluttered and self-occluding extraction targets. This approach is evaluated on a large database of indoor and outdoor video data, demonstrating fast and autonomous motion capture for walking people.
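    A minimal greedy-snake iteration in the classic Williams-Shah style might look like the sketch below; the paper's formulation additionally includes temporal constraints and occlusion modelling, which are omitted here, and all parameter names are illustrative:

```python
import numpy as np

def greedy_snake_step(points, energy_image, alpha=1.0, beta=1.0, gamma=1.0, win=1):
    """One greedy iteration: each control point moves to the position in a
    local window minimising continuity + curvature + image energy."""
    n = len(points)
    mean_d = np.mean([np.linalg.norm(points[i] - points[(i + 1) % n])
                      for i in range(n)])
    new_pts = points.copy()
    for i in range(n):
        prev_p, next_p = new_pts[(i - 1) % n], points[(i + 1) % n]
        best, best_e = points[i], np.inf
        for dy in range(-win, win + 1):
            for dx in range(-win, win + 1):
                cand = points[i] + np.array([dy, dx])
                y, x = int(cand[0]), int(cand[1])
                if not (0 <= y < energy_image.shape[0] and
                        0 <= x < energy_image.shape[1]):
                    continue
                # continuity: deviation from the mean inter-point spacing
                e_cont = (np.linalg.norm(cand - prev_p) - mean_d) ** 2
                # curvature: second difference along the contour
                e_curv = np.linalg.norm(prev_p - 2 * cand + next_p) ** 2
                e = alpha * e_cont + beta * e_curv + gamma * energy_image[y, x]
                if e < best_e:
                    best_e, best = e, cand
        new_pts[i] = best
    return new_pts
```

    Iterating this step until few points move gives the basic snake; the temporal extension would add a term penalising deviation from the contour's position in the previous frame.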

    Image and Volume Segmentation by Water Flow

    A general framework for image segmentation is presented in this paper, based on the paradigm of water flow. The major water-flow attributes, such as water pressure, surface tension and capillary force, are defined in the context of force-field generation and make the model adaptable to topological and geometrical changes. A flow-stopping image functional combining edge- and region-based forces is introduced to provide both capture range and accuracy. The method is assessed qualitatively and quantitatively on synthetic and natural images. It is shown that the new approach can segment objects with complex shapes or weakly contrasted boundaries, and has good immunity to noise. The operator is also extended to 3-D, and is successfully applied to medical volume segmentation.
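    One plausible form of a combined edge- and region-based stopping functional is sketched below; the exact functional, weights and region statistic used in the paper may differ, so everything here is an illustrative assumption:

```python
import numpy as np

def flow_stopping(img, region_mean, lam=0.5):
    """Hedged sketch of a flow-stopping functional: an edge-based term
    (small near strong gradients) combined with a region-based term
    (large where intensity matches the flooded region). Flow would be
    allowed to continue where the returned value is high."""
    gy, gx = np.gradient(img.astype(float))
    edge = 1.0 / (1.0 + gx ** 2 + gy ** 2)       # drops near strong edges
    region = np.exp(-np.abs(img - region_mean))  # high where similar to region
    return lam * edge + (1.0 - lam) * region
```

    The water front would then advance into neighbouring pixels while this value stays high, halting at strong edges or at intensities unlike the region already flooded.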

    On using gait to enhance frontal face extraction

    Visual surveillance finds increasing deployment for monitoring urban environments. Operators need to be able to determine identity from surveillance images and often use face recognition for this purpose. In surveillance environments, it is necessary to handle pose variation of the human head, low frame rates, and low-resolution input images. We describe the first use of gait to enable face acquisition and recognition, by analysis of 3-D head motion and gait trajectory, with super-resolution analysis. We use region- and distance-based refinement of head pose estimation. We develop a direct mapping to relate the 2-D image with a 3-D model. In gait-trajectory analysis, we model the looming effect so as to obtain the correct face region. Based on head position and the gait trajectory, we can reconstruct high-quality frontal face images which are demonstrated to be suitable for face recognition. The contributions of this research include the construction of a 3-D model for pose estimation from planar imagery and the first use of gait information to enhance the face-extraction process, allowing for deployment in surveillance scenarios.

    On gait as a biometric: progress and prospects

    There is increasing interest in automatic recognition by gait, given its unique capability to recognize people at a distance when other biometrics are obscured. Its application domains are those of any noninvasive biometric, but with particular advantage in surveillance scenarios. Its recognition capability is supported by studies in other domains such as medicine (biomechanics), mathematics and psychology, which also suggest that gait is unique. Further, examples of recognition by gait can be found in literature, with an early reference by Shakespeare concerning recognition by the way people walk. Many of the current approaches confirm the early results that suggested gait could be used for identification, and now do so on much larger databases. This has been especially influenced by DARPA's Human ID at a Distance research program with its wide range of data and approaches. Gait has benefited from developments in other biometrics and has led to new insight, particularly in view of covariates. Equally, gait-recognition approaches concern the extraction and description of moving articulated shapes, and this has wider implications than biometrics alone.

    On a shape adaptive image ray transform

    A conventional approach to image analysis is to perform feature extraction at a low level (such as edge detection) and to follow this with high-level feature extraction to determine structure (e.g. by collecting edge points using the Hough transform). The original Image Ray Transform (IRT) demonstrated a capability to extract structures at a low level. Here we extend the IRT to add shape specificity, making it select specific shapes rather than just edges; the new capability is achieved by the addition of a single parameter that controls which shape is selected by the extended IRT. The extended approach can then perform low- and high-level feature extraction simultaneously. We show how the IRT process can be extended to focus on chosen shapes such as lines and circles, and confirm the new capability by application of conventional methods for exact shape location. We analyze performance with images from the Caltech-256 dataset and show that the new approach can indeed select chosen shapes. Further research could capitalize on the new extraction ability to extend descriptive capability.

    The way we walk

    Mark Nixon and John Carter reveal how developments in biometrics could mean the increasing use of biometric evidence, such as ear shape and gait, to identify defendants.

    Automatic Lumbar Vertebrae Segmentation in Fluoroscopic Images via Optimised Concurrent Hough Transform

    Low back pain is a very common problem in industrialised countries and its associated cost is enormous, yet diagnosis of the underlying causes can be extremely difficult. Many studies have focused on mechanical disorders of the spine. Digital videofluoroscopy (DVF) is widely used to obtain images for motion studies: it can provide motion sequences of the lumbar spine, but the images obtained often suffer from noise, exacerbated by the very low radiation dosage. Thus determining vertebra positions within the image sequence presents a considerable challenge. In this paper, we show how our new approach can automatically detect the positions and borders of vertebrae concurrently, relieving many of the problems experienced in other approaches. First, we use phase congruency to relieve the difficulty associated with threshold selection in edge detection of the illumination-variant DVF images. Then, our new Hough transform approach is applied to determine the moving vertebrae concurrently. We include optimisation via a genetic algorithm since, without it, the extraction of multiple moving vertebrae is computationally daunting. Our results show that this new approach can indeed provide extractions of position and rotation which appear to be of sufficient quality to aid therapy and diagnosis of spinal disorders.
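    The role of the genetic algorithm can be illustrated with a deliberately simplified sketch: a population of candidate poses (translation only here) is scored by how much edge evidence supports it, avoiding exhaustive accumulation. The fitness function, operators and parameters below are illustrative assumptions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(params, edge_map, template):
    """Evidence score: summed edge support under the template translated
    by params (a hypothetical fitness for illustration)."""
    tx, ty = int(params[0]), int(params[1])
    score = 0.0
    for px, py in template:
        x, y = px + tx, py + ty
        if 0 <= x < edge_map.shape[0] and 0 <= y < edge_map.shape[1]:
            score += edge_map[x, y]
    return score

def ga_search(edge_map, template, pop=30, gens=40, sigma=2.0):
    """Minimal truncation-selection GA over translation: keep the best
    half of the population, refill with Gaussian-mutated copies."""
    P = rng.uniform(0, max(edge_map.shape), size=(pop, 2))
    for _ in range(gens):
        scores = np.array([fitness(p, edge_map, template) for p in P])
        elite = P[np.argsort(scores)[::-1][:pop // 2]]
        children = elite + rng.normal(0.0, sigma, elite.shape)
        P = np.vstack([elite, children])
    scores = np.array([fitness(p, edge_map, template) for p in P])
    return P[scores.argmax()]
```

    A concurrent version would extend the chromosome to hold the pose of every vertebra at once, which is where exhaustive accumulation becomes impractical and the GA pays off.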

    Lumbar Spine Location in Fluoroscopic Images by Evidence Gathering

    Low back pain (LBP) is a very common problem, and lumbar segmental instability is one of its causes. It is important to investigate lumbar spine movement in order to understand instability better and as an aid to diagnosis. Digital videofluoroscopy provides a method of quantifying the motion of individual vertebrae, but due to the relatively poor image quality it is difficult and time-consuming to locate manually the landmarks from which the kinematics can be calculated. Some semi-automatic approaches have already been developed, but these are still time-consuming and require some manual interaction. In this paper we apply the Hough transform (HT) to locate the lumbar spinal segments automatically. The HT is a powerful tool in computer vision with good performance under noise and partial occlusion. A recent arbitrary-shape representation avoids problems inherent in tabular representations of the generalised HT (GHT) by describing shapes with a continuous formulation: the target shape is described by a set of Fourier descriptors, which vote in an accumulator space from which the object's translation (in both x and y), rotation and scale can be determined. At present, this algorithm has been applied to images of the lumbar spine and has been shown to provide satisfactory results. Further work will concentrate on reducing the computational time for real-time application, and on approaches to refine information at the apices, given initialisation by the new HT method.
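    The continuous Fourier-descriptor voting scheme can be sketched for the translation-only case as follows; the rotation and scale dimensions, and the exact parameterisation used in the paper, are omitted, so treat this purely as an illustration:

```python
import numpy as np

def fourier_boundary(coeffs, n_samples=64):
    """Sample a closed boundary from complex Fourier descriptors:
    z(t) = sum_k c_k * exp(i*k*t)."""
    t = np.linspace(0, 2 * np.pi, n_samples, endpoint=False)
    z = np.zeros(n_samples, dtype=complex)
    for k, c in coeffs.items():
        z += c * np.exp(1j * k * t)
    return z

def vote_translation(edge_points, coeffs, shape):
    """Translation-only evidence gathering: each edge point votes for
    every centre position consistent with the Fourier-described boundary;
    the accumulator peak gives the object's location."""
    acc = np.zeros(shape)
    boundary = fourier_boundary(coeffs)
    for ex, ey in edge_points:
        for b in boundary:
            cx, cy = int(round(ex - b.real)), int(round(ey - b.imag))
            if 0 <= cx < shape[0] and 0 <= cy < shape[1]:
                acc[cx, cy] += 1
    peak = np.unravel_index(acc.argmax(), acc.shape)
    return peak, acc
```

    Because the boundary comes from a continuous formulation rather than a lookup table, arbitrary smooth vertebra outlines can vote in the same way; adding rotation and scale simply enlarges the accumulator's dimensionality.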